Distributed Big-Data Optimization via Blockwise Gradient Tracking
Authors
Abstract
We study distributed big-data nonconvex optimization in multiagent networks. We consider the (constrained) minimization of the sum of a smooth (possibly nonconvex) function, i.e., the agents' sum-utility, plus a convex (possibly nonsmooth) regularizer. Our interest is in big-data problems in which there is a large number of variables to optimize. If treated by means of standard distributed optimization algorithms, these large-scale problems may be intractable due to the prohibitive local computation and communication burden at each node. We propose a novel distributed solution method whereby, at each iteration, agents update in an uncoordinated fashion only one block of the entire decision vector. To deal with the nonconvexity of the cost function, the scheme hinges on successive convex approximation techniques combined with a blockwise perturbed push-sum consensus protocol, which is instrumental to perform local block-averaging operations and tracking of gradient averages. Asymptotic convergence to stationary solutions of the problem is established. Finally, numerical results show the effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and practical convergence speed.
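To make the blockwise idea concrete, here is a minimal numerical sketch, not the authors' algorithm: it assumes an undirected ring network with doubly stochastic weights in place of the perturbed push-sum protocol (which handles directed graphs), a plain gradient step in place of the successive-convex-approximation surrogate, no regularizer or constraints, cyclic rather than uncoordinated block selection, and synthetic least-squares costs. The tracker is also updated on the full vector for simplicity, whereas the paper's scheme tracks blockwise to reduce communication.

```python
# Minimal sketch of block-coordinate updates with gradient tracking.
# Simplifying assumptions (NOT the paper's setup): undirected ring graph,
# doubly stochastic weights, plain gradient step, synthetic quadratic costs.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, n_blocks = 5, 12, 3
blocks = np.split(np.arange(dim), n_blocks)   # equal-size blocks for simplicity

# Local least-squares costs f_i(x) = 0.5 * ||A_i x - b_i||^2 (illustrative only).
A = [rng.standard_normal((8, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(8) for _ in range(n_agents)]

def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing weights on a ring graph.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = [np.zeros(dim) for _ in range(n_agents)]   # local copies of the decision vector
y = [grad(i, x[i]) for i in range(n_agents)]   # trackers of the average gradient
step = 0.02

for t in range(3000):
    blk = blocks[t % n_blocks]                 # block selected this round (cyclic)
    g_old = [grad(i, x[i]) for i in range(n_agents)]
    # Decision update: consensus + tracked-gradient step on the selected block only.
    x_new = []
    for i in range(n_agents):
        xi = x[i].copy()
        xi[blk] = sum(W[i, j] * x[j][blk] for j in range(n_agents)) - step * y[i][blk]
        x_new.append(xi)
    # Gradient tracking: mix trackers, then add the local gradient change.
    y = [sum(W[i, j] * y[j] for j in range(n_agents))
         + grad(i, x_new[i]) - g_old[i]
         for i in range(n_agents)]
    x = x_new

# Agents should (approximately) agree on a stationary point of sum_i f_i.
print("consensus error:", max(np.linalg.norm(x[i] - x[0]) for i in range(n_agents)))
print("stationarity:   ", np.linalg.norm(sum(grad(i, x[0]) for i in range(n_agents))))
```

The point of the sketch is the communication pattern: per iteration each agent exchanges and averages only one block of its decision vector, which is what keeps per-round cost low when the dimension is large.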
Similar Resources
Distributed Stochastic Optimization via Adaptive Stochastic Gradient Descent
Stochastic convex optimization algorithms are the most popular way to train machine learning models on large-scale data. Scaling up the training process of these models is crucial in many applications, but the most popular algorithm, Stochastic Gradient Descent (SGD), is a serial algorithm that is surprisingly hard to parallelize. In this paper, we propose an efficient distributed stochastic op...
Local-aggregate modeling for big data via distributed optimization: Applications to neuroimaging.
Technological advances have led to a proliferation of structured big data that have matrix-valued covariates. We are specifically motivated to build predictive models for multi-subject neuroimaging data based on each subject's brain imaging scans. This is an ultra-high-dimensional problem that consists of a matrix of covariates (brain locations by time points) for each subject; few methods curr...
A Distributed Stochastic Gradient Tracking Method
In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a dis...
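For readers unfamiliar with gradient tracking, one common form of the (stochastic) gradient-tracking recursion reads as follows; the notation is generic and assumed here, not taken verbatim from the cited paper. Each agent $i$ keeps an iterate $x_i^k$ and a tracker $y_i^k$ initialized at its first gradient estimate, with $w_{ij}$ the entries of a doubly stochastic mixing matrix and $g_i(\cdot)$ an unbiased estimate of $\nabla f_i$:

$$x_i^{k+1}=\sum_{j} w_{ij}\,x_j^{k}-\alpha\,y_i^{k},\qquad y_i^{k+1}=\sum_{j} w_{ij}\,y_j^{k}+g_i\big(x_i^{k+1}\big)-g_i\big(x_i^{k}\big).$$

The second recursion preserves the invariant that the network average of the trackers equals the average of the current gradient estimates, which is what lets each agent descend along an estimate of the global gradient rather than only its local one.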
Analysis of Tidal Data via the Blockwise Bootstrap
We analyze tidal data from Port Mansfield, TX using Kunsch's (1989) blockwise bootstrap in the regression setting. In particular, we estimate the variability of parameter estimates in a harmonic analysis via block subsampling of residuals from a least squares fit. We see that naive least squares variance estimates can be either too large or too small depending on the strength of correlation and th...
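As a concrete illustration of the residual-subsampling idea described above, here is a minimal moving-block bootstrap on synthetic harmonic-regression data; the series, block length, and model are assumptions for illustration, not the paper's tidal data.

```python
# Minimal moving-block bootstrap of regression residuals (synthetic data,
# NOT the paper's tidal series; block length L is an illustrative choice).
import numpy as np

rng = np.random.default_rng(1)
n, L, B = 240, 12, 500                          # sample size, block length, replications
t = np.arange(n)
X = np.column_stack([np.ones(n), np.sin(2 * np.pi * t / 24), np.cos(2 * np.pi * t / 24)])
beta_true = np.array([1.0, 2.0, -1.0])
# Serially correlated noise via a short moving average.
e = np.convolve(rng.standard_normal(n + 4), np.ones(5) / 5, mode="valid")
y = X @ beta_true + e

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

boot = np.empty((B, 3))
for bb in range(B):
    # Stitch together randomly chosen length-L blocks of residuals, then refit.
    starts = rng.integers(0, n - L, size=n // L)
    e_star = np.concatenate([resid[s:s + L] for s in starts])
    y_star = X @ beta_hat + e_star
    boot[bb] = np.linalg.lstsq(X, y_star, rcond=None)[0]

print(boot.std(axis=0))   # block-bootstrap standard errors of the coefficients
```

Resampling whole blocks rather than individual residuals preserves the short-range correlation structure that naive least-squares variance formulas ignore.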
Convex Optimization for Big Data
This article reviews recent advances in convex optimization algorithms for Big Data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques like first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new...
Journal
Journal title: IEEE Transactions on Automatic Control
Year: 2021
ISSN: 0018-9286, 1558-2523, 2334-3303
DOI: https://doi.org/10.1109/tac.2020.3008713